
Search results for: "National Institute of Standards"


25 mentions found


Polar ice melt driven by climate change is affecting Earth's rotation, according to new research. A human-driven change in the Earth's rotation has never been seen before, and may affect computing. (DrPixel/Getty Images) Don't worry — this change in Earth's rotation won't be catastrophic. (Denis Tangney Jr./Getty Images) As a result, scientists predict that we would need the first-ever negative leap second by 2026. (iStock/Getty Images Plus) There are three main mechanisms that control the Earth's spin. One is tidal friction, or the interaction between moving ocean water and the ocean floor, which slows Earth's rotation.
Persons: Duncan Agnew, Denis Tangney Jr, Felicitas Arias, Judah Levine, Agnew, Andres Forza Organizations: Service, Scripps Institution of Oceanography, International Bureau, Time Department, National Institute of Standards, Technology, Washington Post, Northern, Reuters, CNN Locations: Wellesley, Massachusetts, Needham, Northern Canada, Scandinavia, Argentina
Clocks may have to skip a second — called a “negative leap second” — around 2029, a study in the journal Nature said Wednesday. “We are headed toward a negative leap second,” said Dennis McCarthy, retired director of time for the U.S. Without the effect of melting ice, Earth would need that negative leap second in 2026 instead of 2029, Agnew calculated. In 2012, some computer systems mishandled the leap second, causing problems for Reddit, Linux, Qantas Airlines and others, experts said. Then add in the “weird” effect of subtracting, not adding, a leap second, Agnew said.
Persons: Duncan Agnew, Agnew, Dennis McCarthy, Judah Levine, McCarthy, Levine, Seth Borenstein Organizations: Nature, Scripps Institution of Oceanography, University of California, U.S. Naval, National Institute of Standards, Technology, Linux, Qantas Airlines, Tech, Google, Amazon, Associated Press Locations: San Diego, AP.org
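The two snippets above describe both the 2012 leap-second failures and the untested negative case. A minimal sketch can make the mechanics concrete — this is illustrative Python only, not real timekeeping code; the base offset and the 2015/2016 event dates match published TAI-UTC history, while the 2029 negative leap second is the study's hypothetical:

```python
from datetime import datetime, timedelta

# Each positive leap second adds +1 to the TAI-UTC offset.
# A negative leap second would be the first-ever -1 step.
leap_events = [
    (datetime(2015, 6, 30, 23, 59, 59), +1),  # positive leap second
    (datetime(2016, 12, 31, 23, 59, 59), +1),
]

def tai_minus_utc(t, base_offset=35):
    """Cumulative TAI-UTC offset (seconds) at time t."""
    return base_offset + sum(step for when, step in leap_events if t > when)

def next_second(t, negative_leap_at=None):
    """Advance one second; a negative leap second skips a whole second."""
    t2 = t + timedelta(seconds=1)
    if negative_leap_at and t2 == negative_leap_at:
        t2 += timedelta(seconds=1)  # 23:59:59 is dropped entirely
    return t2

# Hypothetical negative leap second at the end of June 2029:
skip = datetime(2029, 6, 30, 23, 59, 59)
t = datetime(2029, 6, 30, 23, 59, 58)
print(next_second(t, negative_leap_at=skip))  # 2029-07-01 00:00:00
```

Software that assumes every minute has at least 60 seconds has never been exercised against that skipped second, which is the "weird" untested path the articles refer to.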
Business Insider reviewed "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." Readers come away understanding AI, how it can perpetuate bias, and what we can do about it. Buolamwini especially documents the coded gaze applied to facial recognition AI technologies, which serve as the subject for much of Buolamwini's research. There, 90% of tenants were people of color, mostly women and older adults — all groups that facial recognition has been scientifically proven to be less accurate on.
Persons: Joy Buolamwini, Buolamwini, Joe Biden, Buolamwini's, Simone Biles Organizations: Service, MIT, Business, Georgia Tech, Economic, National Institute of Standards, Technology Locations: Mississippi, Ghanaian, Brooklyn, Davos
The spike in AI lobbying comes amid growing calls for AI regulation and the Biden administration's push to begin codifying those rules. Until 2017, the number of organizations that reported AI lobbying stayed in the single digits, per the analysis, but the practice grew steadily in the years since before exploding in 2023. The data showed a range of industries as new entrants to AI lobbying: chip companies like AMD and TSMC, venture firms like Andreessen Horowitz, biopharmaceutical companies like AstraZeneca, conglomerates like Disney and AI training data companies like Appen. Organizations that reported lobbying on AI issues last year also typically lobby the government on a range of other issues. In its Request for Information, the Institute specifically asked responders to weigh in on developing responsible AI standards, AI red-teaming, managing the risks of generative AI and helping to reduce the risk of "synthetic content" (i.e., misinformation and deepfakes).
Persons: OpenSecrets, Biden, ByteDance, Andreessen Horowitz, CNBC's Mary Catherine Wellons, Megan Cassella Organizations: CNBC, Spotify, Samsung, Nvidia, Big Tech, AMD, U.S. Department of Commerce's National Institute of Standards, Technology, NIST Locations: U.S
WASHINGTON (AP) — The Biden administration will start implementing a new requirement for the developers of major artificial intelligence systems to disclose their safety test results to the government. Chief among the 90-day goals from the order was a mandate under the Defense Production Act that AI companies share vital information with the Commerce Department, including safety tests. The government's National Institute of Standards and Technology will develop a uniform framework for assessing safety, as part of the order Biden signed in October. The Commerce Department has developed a draft rule on U.S. cloud companies that provide servers to foreign AI developers. The government also has scaled up the hiring of AI experts and data scientists at federal agencies.
Persons: Biden, Joe Biden, Ben Buchanan, Buchanan Organizations: WASHINGTON, White, AI, Defense, Commerce Department, White House, National Institute of Standards, Technology, European Union, Transportation, Treasury, Health, Human Services
President Joe Biden unveiled a new executive order on artificial intelligence — the U.S. government's first action of its kind — requiring new safety assessments, equity and civil rights guidance, and research on AI's impact on the labor market. It also calls for working with international partners to implement AI standards around the world. The order comes ahead of an AI safety summit hosted by the U.K. President Biden's executive order requires that large companies share safety test results with the U.S. government before the official release of AI systems.
Persons: Joe Biden, Bruce Reed, Biden Organizations: Calif, White House, Commerce Department, Department of Health, Human Services, House, Staff, U.K, U.S, National Institute of Standards, Commerce, Sunday Locations: San Francisco, U.S, AI.gov
AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security. Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release. The official briefed reporters on condition of anonymity, as required by the White House. “He was as impressed and alarmed as anyone,” deputy White House chief of staff Bruce Reed said in an interview.
Persons: Joe Biden, Biden, Jeff Zients, Zients, Bruce Reed, David, Tom Cruise, Reed, Rishi Sunak, Kamala Harris, ReNika Moore, Suresh Venkatasubramanian, Venkatasubramanian Organizations: WASHINGTON, Democratic, National Institute of Standards, Technology, Commerce Department, White, AI, European, Google, Meta, Microsoft, American Civil Liberties Union, Biden Locations: Maine, Israel, San, U.S, European Union, China, Britain, West
President Biden just signed an executive order regarding the development and use of AI technology. The broad executive order touches on more than a dozen possible uses of AI and generative AI that already are, or could in the future, directly impact people's lives. The size threshold is so high that currently most available models do not meet the criteria for further transparency called for in Biden's executive order. All of the major tech companies did, however, agree earlier this year to adhere to standards of responsibility and training in their AI work. Do you think AI is in a hype cycle, and everybody's overreacting to what it's going to mean?
Persons: Biden, Ben Buchanan, Buchanan, Andrew Bosworth, Ben, Kali Hays Organizations: Service, White, Office of Science, Technology, Monday, Meta, Google, Microsoft, National Institute of Standards, Biden White House, Department of Commerce, Technology, Atomic Energy, Defense, EO Locations: United States
Reuters reviewed a confidential draft of the 10-member Association of Southeast Asian Nations' (ASEAN) "guide to AI ethics and governance," whose content has not previously been reported. In contrast to the EU's AI Act, the ASEAN "AI guide" asks companies to take countries' cultural differences into consideration and doesn’t prescribe unacceptable risk categories, according to the current version reviewed. With almost 700 million people and over a thousand ethnic groups and cultures, Southeast Asian countries have widely divergent rules governing censorship, misinformation, public content and hate speech that would likely affect AI regulation. The ASEAN guide advises companies to put in place an AI risk assessment structure and AI governance training, but leaves specifics to companies and local regulators. EU officials and lawmakers told Reuters that the bloc would continue to hold talks with Southeast Asian states to align over broader principles.
Persons: Stephen Braim, Alexandra van Huffelen, Fanny Potkin, Supantha Mukherjee, Panu, Sam Holmes Organizations: Reuters, Association of Southeast Asian Nations, ASEAN Digital, Companies, IBM, Google, ASEAN, Technology, United States, NIST, U.S . Department of Commerce's National Institute of Standards, Meta, Southeast, EU, European Commission, Thomson Locations: SINGAPORE, STOCKHOLM, Thailand, United, Southeast Asia, Japan, South Korea, Brussels, Singapore, Stockholm, Bangkok
"Emerging technologies such as artificial intelligence hold both enormous potential and enormous peril," Biden said at the U.N. on Tuesday. "We need to be sure they're used as tools of opportunity, not as weapons of oppression. The discussion is taking place with the backdrop of an intense competition with China, which is also seeking to be a world leader in the technology. In the meantime, several agencies have asserted their ability to rein in the abuses of AI with existing legal power. The Biden administration has also secured voluntary commitments from leading AI companies to test their tools for security before they release them to the public.
Persons: Joe Biden, Biden, Chuck Schumer, Elon Musk, Mark Zuckerberg, Schumer, Jeffrey Sachs Organizations: United Nations General Assembly, European Union, National Institute of Standards, Technology, U.S . Department of Commerce Locations: United States, U.S, China, Russia
A damaged vehicle is pictured in the fire-ravaged town of Lahaina on the island of Maui in Hawaii, U.S., August 15, 2023. Joseph Schilling, 66, enjoyed hunting for bullet shells and was known by his loved ones for making delicious sugar toast. Both died last week in the wildfires that scorched Maui and were among the first victims identified by authorities and family members in the heart-wrenching days afterward. As names of the dead are shared, an early pattern has emerged: Many who perished were over the age of 65. The other victims identified so far by officials are Melva Benjamin, 71; Robert Dyckman, 74; Buddy Jantoc, 79; Alfredo Galinato, 79; and Virginia Dofa, 90.
Persons: Mike Blake, Donna Gomes, Joseph Schilling, Schilling, Joe, Akiva Bluh, Bluh, Gomes, Tehani Kuhaulua, Benjamin, Robert Dyckman, Buddy Jantoc, Alfredo Galinato, Virginia Dofa, Galinato, KITV, Joshua Galinato, Daniel Trotta, Brendan O'Brien, Don Durfee, Cynthia Osterman, Jonathan Oatis Organizations: REUTERS, U.S. Fire Administration, Research, National Institute of Standards, Technology, Fire Administration, ABC, Thomson Locations: Lahaina, Maui, Hawaii, U.S, Virginia
To receive the U.S. Cyber Trust Mark, companies will have to follow cybersecurity standards set by the National Institute of Standards and Technology (NIST), such as requiring strong passwords and software updates. Other agencies across the executive branch also plan to get involved in making connected devices more secure, according to the announcement. For example, the Department of Energy will collaborate with National Labs and industry to create cybersecurity labeling standards for smart meters and power inverters. And the Department of State plans to engage allies in syncing up cybersecurity labeling standards and creating international recognition of such labels. Once completed, the FCC could choose to use the standards to apply the new label to these products as well.
Persons: Biden Organizations: U.S, U.S. Cyber, Federal Communications Commission, Google, LG Electronics, Logitech, Samsung, National Institute of Standards, Technology, NIST, FCC, Infrastructure Security Agency, Department of Energy, National Labs, Department of State, CNBC, YouTube Locations: U.S
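The snippet above notes that Cyber Trust Mark applicants must follow NIST cybersecurity standards "such as requiring strong passwords." As a toy sketch of that spirit (in line with NIST SP 800-63B, which favors minimum length and breached-password screening over arbitrary composition rules), the blocklist below is a made-up stand-in, not a real breached-password feed:

```python
# Hypothetical password check in the spirit of NIST SP 800-63B:
# enforce a minimum length and reject known-compromised passwords,
# rather than imposing complex composition rules.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def acceptable_password(pw: str) -> bool:
    if len(pw) < 8:                      # 800-63B minimum for user-chosen secrets
        return False
    if pw.lower() in COMMON_PASSWORDS:   # screen against a breach corpus
        return False
    return True

print(acceptable_password("letmein"))                         # False
print(acceptable_password("correct horse battery staple"))    # True
```

A real implementation would check against a large breached-credential database rather than a hard-coded set.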
On Tuesday, the Biden administration announced it’s moving to implement a cybersecurity labeling program aimed at helping consumers pick out trustworthy tech products that are rated as more secure than the competition. Products certified under the new program may come with a QR code that links to a national database affirming its participation, the administration added in a release. “This new labeling program would help provide Americans with greater assurances about the cybersecurity of the products they use and rely on in their everyday lives,” the administration said in a statement. “It would also be beneficial for businesses, as it would help differentiate trustworthy products in the marketplace.” The government proposal comes two years after President Joe Biden signed an executive order calling for an “‘energy star’ type of label” for tech products. “Market forces alone were never going to be sufficient to force manufacturers to step up and deliver more secure devices,” he said.
Persons: Biden, Joe Biden, Dave DeWalt Organizations: CNN, National Institute of Standards, Technology, NIST, House, Products, Twitter, PayPal, Federal Communications Commission, FCC, Colonial Pipeline, Companies, Amazon, Cisco, Google, LG, Logitech, Samsung, Consumer Technology Association
US to launch working group on generative AI, address its risks
2023-06-22 | www.reuters.com | time to read: +1 min
WASHINGTON, June 22 (Reuters) - A U.S. agency will launch a public working group on generative artificial intelligence (AI) to help address the new technology's opportunities while developing guidance to confront its risks, the Commerce Department said on Thursday. The National Institute of Standards and Technology (NIST), a nonregulatory agency that is part of the Commerce Department, said the working group will draw on technical expert volunteers from the private and public sectors. "This new group is especially timely considering the unprecedented speed, scale and potential impact of generative AI and its potential to revolutionize many industries and society more broadly," NIST Director Laurie Locascio said. Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and images, and whose impact has been compared to that of the internet. Reporting by Rami Ayyub; editing by Jonathan Oatis. Our Standards: The Thomson Reuters Trust Principles.
Persons: Laurie Locascio, Joe Biden, Rami Ayyub, Jonathan Oatis Organizations: Commerce Department, National Institute of Standards, Technology, NIST, Regulators, Thomson Locations: U.S
Google and OpenAI, two U.S. leaders in artificial intelligence, have opposing ideas about how the technology should be regulated by the government, a new filing reveals. Google is one of the leading developers of generative AI with its chatbot Bard, alongside Microsoft -backed OpenAI with its ChatGPT bot. While OpenAI CEO Sam Altman touted the idea of a new government agency focused on AI to deal with its complexities and license the technology, Google in its filing said it preferred a "multi-layered, multi-stakeholder approach to AI governance." "At the national level, we support a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a 'Department of AI,'" Google wrote in its filing. "There is this question of should there be a new agency specifically for AI or not?"
Persons: Bard, Sam Altman, Emily M. Bender, Brad Smith, Greg Brockman, Ilya Sutskever, Kent Walker, Helen Toner Organizations: Google, National Telecommunications, Washington Post, Microsoft, National Institute of Standards, Technology, NIST, AI, FDA, University of Washington's Computational, Laboratory, Twitter, International Atomic Energy Agency, Post, Global Affairs, Georgetown's Center for Security, Emerging Technology, CNBC
In May, Samsung banned the use of generative AI tools after the company discovered an employee uploaded sensitive code to ChatGPT. This legal mindset persists, which is one of the reasons Ironclad moved forward with generative AI policy so swiftly. Singh said, "It truly needs to be a multi-stakeholder dialogue," including teams from policy, AI, risk and compliance and legal. Creating a generative AI policy is also a good opportunity for companies to scrutinize all of their technology policies, including implementation, change management, and long-term usage. It can also include using generative AI tools as foundational content for work due to the fact it creates inherent bias.
Persons: Jakub Porzycki, Jason Boehmig, Boehmig, Navrina Singh, Singh, Vince Lynch, Lynch Organizations: Samsung, Nurphoto, Getty, Companies, National AI Advisory, Intelligence, National Institute of Standards, Technology, European Union
For decades, “the rule of law and a commitment to democracy has kept technology in its proper place,” Smith said. Microsoft vice chair and president Brad Smith speaks at the Semafor World Economic Summit on April 12, 2023 in Washington, DC. That framework, which Congress first ordered with legislation in 2020, covers ways that companies can use AI responsibly and ethically. Such an order would leverage the US government’s immense purchasing power to shape the AI industry and encourage the voluntary adoption of best practices, Smith said. Smith’s remarks, and a related policy paper, come a week after Google released its own proposals calling for global cooperation and common standards for artificial intelligence.
Persons: Biden, Brad Smith, Smith, Drew Angerer, Joe Biden, Kent Walker Organizations: CNN, Microsoft, IBM, National Institute of Standards, Technology, NIST, Google Locations: Washington, China, Europe, United States
Microsoft outlines its vision for keeping A.I. in check
2023-05-25 | by Lauren Feiner | www.cnbc.com | time to read: +2 min
The principles Microsoft President Brad Smith announced Thursday are:
— Installing and building on AI safety frameworks led by the government, such as the U.S. National Institute of Standards and Technology AI Risk Management Framework.
— Requiring safety brakes when AI is used to control critical infrastructure.
Smith suggested AI services should adopt a framework from the financial services sector: Know Your Customer, or KYC. Microsoft has said it's investing billions of dollars into OpenAI as it seeks to be a leader in the field.
Persons: Brad Smith, Donald Trump, Smith, Sam Altman Organizations: White, Microsoft, U.S. National Institute of Standards, Technology, Washington, D.C, CNBC, YouTube Locations: Washington
U.S. senator introduces bill targeting AI's shortfalls
2023-04-28 | www.reuters.com | time to read: +1 min
WASHINGTON, April 28 (Reuters) - Senator Michael Bennet introduced a bill on Thursday that would create a task force to look at U.S. policies on artificial intelligence, and identify how best to reduce threats to privacy, civil liberties and due process. In Washington, national security experts have fretted about its use by foreign adversaries, and teachers have complained about it being used to cheat. The job of the AI Task Force, which could include cabinet members, will be to identify shortfalls in regulatory oversight of AI and recommend reforms, if needed. Under the terms of the bill, the task force would work for 18 months, issue a final report and then shut down. Reporting by Diane Bartz; Editing by Sandra Maler
More than 200 companies from myriad sectors have applied for CHIPS Act funding, Commerce Secretary Gina Raimondo told CNBC's Jim Cramer on Friday, as part of what Raimondo called a mandate to invest "$50 billion in America's national security." The recipient companies are varied, but Raimondo told Cramer that some of the funds will be routed to "packaging companies, leading edge companies" within the U.S. "It has to be spent in America," the Commerce secretary told Cramer. "If you take our money, you can't expand in China for leading-edge" chips, Raimondo said. More than half of the applications cover the first tranche of CHIPS Act funding, which include mature-node and leading-edge chip facilities.
How facial recognition is helping Putin curb dissent
2023-03-28 | www.reuters.com | time to read: +8 min
There, officers told the 51-year-old bank employee that the metro’s facial recognition system had flagged him for detention because of his political activism. Facial recognition is now helping police to identify and sweep up the Kremlin’s opponents as a preventive measure, whenever they choose. The facial recognition system in Moscow is powered by algorithms produced by one Belarusian company and three Russian firms. All but one said they understood from officers that they were flagged for detention by facial recognition. Facial recognition technology uses artificial intelligence algorithms to analyse and identify faces.
Part of the delay, he said, was in getting details from the cloud company, which he declined to name. Cybersecurity companies should be held to a higher standard than others in relaying information about hacks quickly and thoroughly, Mr. Toubba said. The lessons learned from cyberattacks can be just as important as how a company responds to a breach, security chiefs say. LastPass has also rolled out several security tools in its infrastructure, data center and cloud systems, Mr. Toubba said.
OAKLAND, Calif., Feb 14 (Reuters) - Sandbox AQ, a startup spun off from Alphabet Inc (GOOGL.O) last year, said on Tuesday it raised $500 million as it helps customers prepare for a quantum computing future. Quantum computers, whose processors run based on quantum physics, could one day carry out certain calculations millions of times quicker than today's fastest supercomputers, yet they remain years away from making a big change, such as breaking encryption. The simulation does not currently need quantum computers to work, said Hidary. When quantum computers are ready, that work would speed up even further. Sandbox AQ is also using existing types of sensors based on quantum physics.
Metric measurements — also known as the International System of Units, or SI — are managed by a formal international organization. The prefixes Brown came up with are ronna and ronto for 10^27 and 10^-27, and quetta and quecto for 10^30 and 10^-30. In fact, SI units used to be based on actual, real-life physical artifacts. "If everyone sticks to SI prefixes," Brown says, "you don't have to go on Wikipedia to find out how long a light-year is or the power in 1 jansky." Pražák proposed sticking to Greek words and letters.
But this rapid development also brings risk: Future quantum computers could crack the encryption schemes that safeguard valuable data, like health records and financial data. One immediate concern: 'Harvest-now, hack-later' attacks — where sensitive encrypted data is stolen today for decryption using future quantum computers. The good news is that quantum-safe cryptography, capable of protecting this information, exists today. So, modern encryption methods often use large numbers as codes, such that their prime factors form a key. But they should also understand the risk of future fault-tolerant quantum computers, and explore quantum-safe cryptography to protect their data and systems.
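The point about prime factors forming a key can be made concrete with a toy RSA example. This uses insecure textbook-sized numbers purely for illustration (real keys use moduli thousands of bits long), and the future quantum attack implied is Shor's factoring algorithm:

```python
# Toy RSA (insecure, tiny numbers): security rests on the difficulty
# of factoring n = p * q. A fault-tolerant quantum computer running
# Shor's algorithm could factor n efficiently, recovering the key.
p, q = 61, 53
n = p * q                        # public modulus
phi = (p - 1) * (q - 1)
e = 17                           # public exponent, coprime with phi
d = pow(e, -1, phi)              # private exponent (modular inverse)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the private key

# "Harvest now, decrypt later": an attacker who stores `cipher` today
# only needs to factor n in the future to derive d and read the message.
def factor(n):
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f, n // f

p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(cipher, d_recovered, n))  # 42
```

Trial division is trivial at this scale but infeasible for real key sizes, which is precisely the gap a large quantum computer would close, and why the article urges exploring quantum-safe cryptography now.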
Total: 25